[WIP] 486 Add example model package #487
Conversation
Signed-off-by: Nic Ma <nma@nvidia.com>
I didn't implement any config-parsing related logic in it so far, marked as WIP. Thanks.
Interesting comments from @ericspod during the online meeting:
Thanks.
What I was thinking was, from the perspective of a user, the commit hash could be used to determine model versions. It wouldn't be included in the package but would be used to tell apart model versions that claim to be the same version, i.e. if the user forgets to increment the version.
Thanks for your explanation, it makes sense to me!
Slightly adjusted the folder structure, thanks for the reference to the MONAI application. Thanks.
Thanks for several internal discussions, I updated the example model package to show new ideas:
Could you please help take a look again? Thanks in advance.
Not sure why we need shell scripts in the model package... they're not easily portable across operating systems, and not descriptive about what they each achieve. Wouldn't it make more sense to remove any shell scripts and implement functionality in MONAI that is able to interpret/generate the JSON artifacts via a CLI based on the framework being used?
Hi @aihsani, thanks for your suggestion!
And you are right, we should definitely not put complicated logic in the shell scripts. Thanks.
I added it. Thanks.
Hi @wyli, I changed it. Thanks.
Hi @ericspod @wyli @rijobro @aihsani, I have updated this MMAR example according to our online discussion yesterday. Thanks.
To recap what we discussed about the actual file structure, let's say we have a directory with at least these files:
We can consider this directory the model package itself. The metadata.json would contain the information that is currently provided in this PR, plus the arguments to instantiate the network, rather than having those in the inference config. If the model isn't compatible with TorchScript then the model.ts file will be absent, and this information will be needed to recreate the network before loading the stored weights. This directory can then be packaged into a zip file whose structure can be assumed.
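The packaging step from this recap can be sketched with the Python standard library; the directory layout and file names below are illustrative placeholders (only `metadata.json` is taken from the discussion), not the agreed format:

```python
import json
import shutil
import zipfile
from pathlib import Path

# Build an illustrative model package directory (names are assumptions).
pkg = Path("example_pkg")
(pkg / "configs").mkdir(parents=True, exist_ok=True)
(pkg / "configs" / "metadata.json").write_text(json.dumps({"version": "0.1.0"}))
(pkg / "docs").mkdir(exist_ok=True)
(pkg / "docs" / "README.md").write_text("# Example model package\n")

# Package the whole directory into a zip archive with an assumed layout.
archive = shutil.make_archive("example_pkg", "zip", root_dir=pkg)

# A consumer can then rely on that assumed structure when reading it back.
with zipfile.ZipFile(archive) as zf:
    meta = json.loads(zf.read("configs/metadata.json"))
print(meta["version"])  # → 0.1.0
```

Because the zip preserves the directory layout, any tool that knows the convention can locate the metadata without unpacking the whole archive.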
closing in favour of https://github.com/Project-MONAI/tutorials/tree/master/modules/bundles
Fixes #486.
Description
This PR implements a draft example of the model package for discussion, leveraging @ericspod's great work on saving data: Project-MONAI/MONAI#3138
Basic principles
(1) Structured sharable data: define metadata and components in structured files, like JSON or YAML, with predefined schemas to verify the JSON configs, following https://json-schema.org/. We could host the base common schema in some public web storage in the future.
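As a minimal stdlib-only illustration of this verification idea (a real implementation would use a proper JSON Schema validator; the schema and metadata fields below are made up):

```python
import json

# A tiny subset of JSON Schema semantics: required keys and expected types
# (illustrative only, not the actual MONAI metadata schema).
schema = {
    "required": ["name", "version", "task"],
    "types": {"name": str, "version": str, "task": str},
}

def verify(config: dict, schema: dict) -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = [f"missing required field: {k}" for k in schema["required"] if k not in config]
    for key, expected in schema["types"].items():
        if key in config and not isinstance(config[key], expected):
            problems.append(f"field {key!r} should be {expected.__name__}")
    return problems

metadata = json.loads('{"name": "spleen_segmentation", "version": "0.1.0"}')
print(verify(metadata, schema))  # → ['missing required field: task']
```

Hosting the common schema at a public URL would let every package reference one canonical definition instead of vendoring its own copy.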
(2) TorchScript package: export the model weights, metadata, component configs, etc. into a TorchScript file.
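One way to bundle weights and metadata into a single TorchScript file is `torch.jit.save` with its `_extra_files` argument; the network and metadata below are placeholders, not the actual package contents:

```python
import json
import torch

class TinyNet(torch.nn.Module):
    # Placeholder network standing in for a real MONAI model.
    def forward(self, x):
        return x * 2.0

# Script the model and embed metadata alongside the weights in one file.
scripted = torch.jit.script(TinyNet())
extra = {"metadata.json": json.dumps({"version": "0.1.0"})}
torch.jit.save(scripted, "model.ts", _extra_files=extra)

# Reload both the model and the embedded metadata from the single file.
loaded_extra = {"metadata.json": ""}
model = torch.jit.load("model.ts", _extra_files=loaded_extra)
meta = json.loads(loaded_extra["metadata.json"])
print(meta["version"])  # → 0.1.0
```

This keeps the metadata travelling with the weights, so a consumer never has to match up separate files.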
(3) Hybrid programming: for 90% of cases, the JSON configs alone should be enough. For some advanced cases, the developer of the model package decides which parts of the workflow should be shared for others to reconstruct, defines those parts in JSON or YAML config, and implements the other customized logic in a Python program. For example, this PR defines `transforms`, `dataset`, `dataloader`, `inferer`, etc. in the `inference.json` config and implements task-specific inference logic as a native PyTorch program in `inference.py`; other teams can easily leverage the config file to reconstruct the necessary components and implement their own specific inference logic.
(4) Data sharing: the Python program can leverage the `ConfigParser` to get constructed instances or raw config items from the config file with lazy instantiation; configs can refer to / extend other items in the same file or even in other files.

Structure of the package
(1) commands: entry points to execute, like `export.sh` to export the network weights, metadata, config, etc. to a TorchScript model, and `inference.sh` to load the TorchScript model, construct instances, and execute inference. We should enhance this into something like `monairun` to easily support all platforms and environments.
(2) configs: structured configs for shareable components or information, in JSON or YAML, like `metadata.json` to record the necessary meta information of the model package, and `inference.json` to define the args and components for inference usage.
(3) docs: necessary documents, images, etc., like `README.md`, `license.txt`, `tensorboard.png`, depending on the model context.
(4) models: pretrained model weights and the exported TorchScript model file.
(5) [Optional] scripts: if the model package has a customized Python program, like `inference.py`, it leverages the structured components to construct a special inference program.

Status
Work in progress
Checks
./runner [-p <regex_pattern>]